Dual Fixed-Point: An Efficient Alternative to Floating-Point Computation
Authors
Abstract
This paper presents a new data representation known as Dual Fixed-point (DFX), which employs a single exponent bit to select between two different fixed-point scalings. DFX provides a compromise between conventional fixed-point and floating-point representations: it offers implementation complexity similar to that of a fixed-point system together with the improved dynamic range of a floating-point system. The benefit of using DFX over both fixed-point and floating-point is demonstrated with an IIR filter implementation on a Xilinx Virtex II FPGA.
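The core idea can be illustrated with a small codec sketch. The word width and the two fraction lengths below are hypothetical choices for illustration, not parameters taken from the paper: a single exponent bit selects a fine scaling (more fraction bits, small range) or a coarse scaling (fewer fraction bits, wide range).

```python
# Illustrative DFX-style codec (assumed parameters, not from the paper):
# one exponent bit chooses between two fixed-point fraction lengths.
W = 16          # total mantissa word width (assumption)
F_FINE = 12     # fraction bits when exponent bit = 0 (assumption)
F_COARSE = 4    # fraction bits when exponent bit = 1 (assumption)

MANT_MAX = 2 ** (W - 1) - 1
MANT_MIN = -(2 ** (W - 1))

def dfx_encode(x):
    """Encode a real value as (exponent_bit, integer mantissa)."""
    max_fine = MANT_MAX / 2 ** F_FINE
    if abs(x) <= max_fine:
        # Value fits the fine scaling: keep maximum precision.
        return 0, round(x * 2 ** F_FINE)
    # Otherwise fall back to the coarse scaling, saturating at the limits.
    m = round(x * 2 ** F_COARSE)
    return 1, max(MANT_MIN, min(MANT_MAX, m))

def dfx_decode(e, m):
    """Recover the real value from an (exponent_bit, mantissa) pair."""
    return m / 2 ** (F_FINE if e == 0 else F_COARSE)
```

Small magnitudes keep the precision of a 12-fraction-bit fixed-point format, while large magnitudes trade precision for range, all at a storage cost of one extra bit per word.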
Similar references
Fixed-point FPGA Implementation of a Kalman Filter for Range and Velocity Estimation of Moving Targets
Tracking filters are extensively used within object tracking systems in order to provide consecutive smooth estimates of an object's position and velocity with minimum error. Notably, the Kalman filter and its numerous variants are widely known as simple yet effective linear tracking filters in many diverse applications. In this paper, an effective method is proposed for designing and implementa...
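The position-and-velocity tracking setup described above can be sketched as a minimal constant-velocity Kalman filter. The state-space matrices and noise covariances below are illustrative assumptions, not values from the cited paper:

```python
import numpy as np

# Minimal constant-velocity Kalman filter sketch (assumed model and noise).
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # measurement: position only
Q = 0.01 * np.eye(2)                    # process noise covariance (assumption)
R = np.array([[1.0]])                   # measurement noise covariance (assumption)

def kf_step(x, P, z):
    """One predict/update cycle given state x, covariance P, measurement z."""
    # Predict the next state and covariance.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed a stream of position measurements, the filter's velocity estimate converges toward the target's true velocity; a fixed-point FPGA implementation would quantize these matrix operations.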
Reducing Latency, Power, and Gate Count with the Tensilica Floating-Point FMA
Today’s digital signal processing applications such as radar, echo cancellation, and image processing are demanding more dynamic range and computation accuracy. Floating-point arithmetic units offer better precision, higher dynamic range, and shorter development cycles when compared to fixed-point arithmetic units. Minimizing the design’s time to market is more important than ever. Algorithm de...
A Space-Efficient Design for Reversible Floating Point Adder in Quantum Computing
Reversible logic has applications in low-power computing and quantum computing. However, there are few existing designs for reversible floating-point adders and none suitable for quantum computation. In this paper we propose a space-efficient reversible floating-point adder, suitable for binary quantum computation, improving the design of Nachtigal et al. [13]. Our work focuses on improving the...
Profiling floating point value ranges for reconfigurable implementation
Reconfigurable architectures offer potential for performance enhancement by specializing the implementation of floating-point arithmetic. This paper presents FloatWatch, a dynamic execution profiling tool designed to identify where an application can benefit from reduced precision or reduced range in floating-point computations. FloatWatch operates on x86 binaries, and generates a profile outpu...
Deep Convolutional Neural Network Inference with Floating-point Weights and Fixed-point Activations
Deep convolutional neural network (CNN) inference requires a significant amount of memory and computation, which limits its deployment on embedded devices. To alleviate these problems to some extent, prior research utilizes low-precision fixed-point numbers to represent the CNN weights and activations. However, the minimum required data precision of fixed-point weights varies across different netw...
Journal title:
Volume, issue:
Pages: -
Publication date: 2004